R codes for Yang et al 2019 NEE
Please contact Qiang Yang at qiang.yang@uni-konstanz.de for code questions
A short introduction
This document contains the R codes for Yang, Q. et al. The predictability of ecological stability in a noisy world. Nat. Ecol. Evol. Specifically, the codes here were used to:
- construct 4-species foodwebs that are both locally stable and biologically feasible
- generate time series of stochastic environment
- simulate the dynamics of the foodwebs in the stochastic environment
- quantify different ecological stability components
- analyse the stability data and plot the result
R packages used
library(plyr) # data wrangling
library(tidyverse) # a suite of packages for data wrangling and visualization
library(reshape2) # data wrangling
library(tseries) # contains a function to test normality
library(grid) # organize graphics layout
library(gridExtra) # miscellaneous functions for "Grid" graphics
library(knitr) # for knitting Rmd document # You do not need this package for the reproduction of this work
library(ranger) # a fast implementation of random forests
library(RColorBrewer) # to create colourful graphs with pre-made color palettes
library(deSolve) # solvers for initial value problems of differential equations
Note that there are some namespace conflicts between package plyr and package dplyr (embedded in tidyverse). I used PackageName::FunctionName() to resolve them.
Construct food-webs
Food-web modules
We adopted a total of 14 four-species modules that differ in the number of trophic levels, connectance, number of basal species, number of omnivorous species, etc. For each module, we constructed 100 replicate communities by randomly sampling the food-web parameters from biologically reasonable ranges. The constructed communities are locally stable and biologically feasible.
Note that, in this document, there are two types of module ID. The graph below shows the matching between the module ID in most sections of this coding document (old.motif.id, blue text under each module) and the module ID in the paper (new.motif.id, black text under each module). From the section construct food-webs to stability quantification, I will always use the old.motif.id. In the section analysis and illustration of results, I will match the old.motif.id with the new.motif.id to reproduce the figures in the paper.
Interaction strength coefficients
Intraspecific and interspecific interaction coefficients of the food-webs were generated according to Petchey, O. L. et al. Trophically unique species are vulnerable to cascading extinction. Am. Nat. 171, 568-579 (2008). For each food-web module, we defined a function to randomly generate a matrix with the interaction coefficients as matrix entries. But before defining the function, we loaded the adjacency matrices of the modules. The off-diagonal elements of the adjacency matrix represent the consumer-resource relationship and its direction, with \(1\) representing the resource\(\,\to\,\)consumer relationship and \(-1\) representing the consumer\(\,\to\,\)resource relationship. Value \(-1\) in a diagonal element indicates intraspecific competition within a basal species (this is the value the code below checks for). Note, however, that although the diagonal value for non-basal species is \(0\) for now, that does not exclude intraspecific competition among individuals of the non-basal species (i.e. we also assigned intraspecific interactions for the non-basal species; see the function below).
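The function defined below allocates consumer coefficients so that each consumer's total consumption effect sums to \(-0.5\): a single resource link takes \(-0.5\); with several links, one link takes \(-0.4\) and the remaining \(-0.1\) is split evenly among the others (the order is then permuted). A minimal Python sketch of this allocation rule (the function name is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def consumer_coefficients(n_links):
    """Allocate consumer-resource coefficients for a consumer with n_links resources."""
    if n_links == 1:
        return np.array([-0.5])          # a single link takes the full effect
    extra = np.full(n_links - 1, -0.1 / (n_links - 1))  # split the remaining -0.1 evenly
    return rng.permutation(np.concatenate(([-0.4], extra)))

for n in (1, 2, 3):
    coefs = consumer_coefficients(n)
    print(n, round(coefs.sum(), 10))     # the total is always -0.5
```

This keeps the summed per-capita pressure on resources comparable across consumers regardless of how many resources they feed on.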
load(file="adjacent_matrix_list.Rdata")
Then we defined the function for generating the interaction coefficient matrix. The input argument motif.id in the function is the id of any one of the 14 modules (old.motif.id).
A.matrix.f<-function(motif.id){
adjacent_matrix<-adjacent_matrix_list[[motif.id]]
A_matrix <- matrix(NA, 4, 4)
# assign the diagonal entries - intraspecific competition
diagonal.elements <- diag(adjacent_matrix)
diagonal.elements[diagonal.elements==0] <- -0.1
diag(A_matrix) <- diagonal.elements
# assign the interspecific competition between basal species
# how many basal species
basal.species.n <- length(which(diagonal.elements == -1))
# all entries are sampled from the uniform distribution [-0.5,0]
A_matrix_sub <- matrix(runif(basal.species.n^2,min=-0.5,max=0),basal.species.n,basal.species.n)
# change the diagonal entries of basal species to -1
diag(A_matrix_sub) <- -1
A_matrix[1:basal.species.n,1:basal.species.n] <- A_matrix_sub
# assign the consumer-resource interaction coefficient
for(i in (basal.species.n+1):4){ # basal species should not be a consumer
# how many consumer links
possible_links_from_species_i_to_j <- adjacent_matrix[1:(i-1), i]
N_possible_links_from_species_i_to_j <- length(which(possible_links_from_species_i_to_j==-1))
if(N_possible_links_from_species_i_to_j == 1){
possible_links_from_species_i_to_j[possible_links_from_species_i_to_j==-1] <- -0.5
}else{
consumer_interaction_value <- c(-0.4, rep(-0.1/(N_possible_links_from_species_i_to_j-1), N_possible_links_from_species_i_to_j-1))
permute_value <- sample(consumer_interaction_value, length(consumer_interaction_value))
possible_links_from_species_i_to_j[possible_links_from_species_i_to_j==-1] <- permute_value
}
A_matrix[1:(i-1), i] <- possible_links_from_species_i_to_j
}
# assign the resource-consumer interaction coefficients
# generate the energy conversion rate matrix first
conversion_matrix <- A_matrix
conversion_matrix[!is.na(conversion_matrix)] <- 1
conversion_matrix[is.na(conversion_matrix)] <- -0.2
if(motif.id %in% c(7,10)){
conversion_matrix[4,1] <- -0.02
}else
if(motif.id %in% c(8,9)){
conversion_matrix[3,1] <- -0.02
}else
if(motif.id %in% c(11,13)){
conversion_matrix[3,1] <- -0.02
conversion_matrix[4,1] <- -0.02
}else
if(motif.id == 12){
conversion_matrix[4,1] <- -0.02
conversion_matrix[4,2] <- -0.02
}else
if(motif.id == 6){
conversion_matrix[4,2] <- -0.02
}else{
conversion_matrix <- conversion_matrix
}
# to obtain resource-consumer interaction
for(m in 1:4){
for(n in 1:4){
A_matrix[m,n] <- ifelse(is.na(A_matrix[m,n]),
conversion_matrix[m,n]*A_matrix[n,m],A_matrix[m,n])
}
}
return(A_matrix)
}
The intrinsic growth/decay rate of each species
- For each species in a food-web, the intrinsic growth/decay rate was randomly generated.
- Each basal species was always assigned a positive intrinsic growth rate.
- Each non-basal species was always assigned a negative intrinsic rate (decay), so its growth is supported only by its resources.
- The intrinsic growth rate of the basal species was set at 1.
- The intrinsic decay rates of the non-basal species were sampled from the interval [0.001, 0.1] and multiplied by -1.
- The intrinsic decay rates of the non-basal species were ranked so that species at higher trophic levels have slower decay rates (i.e. smaller absolute values).
- The input argument motif.id of the function is the id of the module (old.motif.id).
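The sampling and ranking steps above can be sketched in a few lines of Python (illustrative only; the project's R function follows):

```python
import numpy as np

rng = np.random.default_rng(1)
basal_n = 1                       # number of basal species (module-dependent)
r = np.ones(4)                    # basal growth rates fixed at 1
# sample decay magnitudes, sort them in decreasing order, then negate:
# species at later (higher trophic) positions get smaller absolute decay rates
decay = -np.sort(rng.uniform(0.001, 0.1, 4 - basal_n))[::-1]
r[basal_n:] = decay
print((np.diff(r[basal_n:]) > 0).all(), (r[basal_n:] < 0).all())
```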
R<-c(0,0,0,0)
assign.R.func<-function(motif.id){
adjacent_matrix<-adjacent_matrix_list[[motif.id]]
R<-diag(adjacent_matrix)*(-1)
basalN<-length(which(R==1))
R[(basalN+1):4] <- -sort(runif((4-basalN),0.001,0.1),decreasing = TRUE)
return(R)
}Local stability
- Each food-web was finally selected to meet the requirement of both local stability and feasibility.
- Local stability requires that the real part of all the eigenvalues of the Jacobian matrix of the system are negative.
- Feasibility (biologically) requires that all the species of the system have positive equilibrium densities.
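These two conditions can be illustrated on a hypothetical two-species Lotka-Volterra community (a Python sketch; the coefficients below are made up for illustration and are not taken from the 14 modules):

```python
import numpy as np

# Hypothetical 2-species community: species 1 basal, species 2 its consumer.
A = np.array([[-1.0, -0.3],    # a_12: effect of the consumer on the basal species
              [ 0.06, -0.1]])  # a_21 = -0.2 * a_12 (energy conversion), a_22: self-limitation
r = np.array([1.0, -0.05])     # basal growth rate 1, consumer decay rate -0.05
Neq = np.linalg.solve(A, -r)   # equilibrium densities: solve A N* = -r
feasible = bool((Neq > 0).all())
J = A * Neq[:, None]           # GLV Jacobian at equilibrium: J_ij = N*_i * a_ij
locally_stable = bool((np.linalg.eigvals(J).real < 0).all())
print(feasible, locally_stable)
```

The Jacobian row-scaling `A * Neq[:, None]` mirrors the `A_matrix*Neq` step in the R code below, where recycling multiplies row \(i\) by \(N^*_i\).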
## For a given Jacobian matrix of one community, check its local stability;
## return 1 for local stability; return 0 otherwise
check_local_stability<-function(jacobian_matrix){
eigen.values.real.part<-Re(eigen(jacobian_matrix)$values)
local_stability<-ifelse(all(eigen.values.real.part<0),1,0)
return(local_stability)
}
Generate foodwebs with feasibility and local stability
- We generated 100 food webs with local stability and feasibility for each of the 14 motifs.
- Note: the food webs were generated on a computer cluster (TCHPC) with different random seeds, so food webs generated on another computer will differ from the food webs of this project (but this does not change the conclusions of the paper!). I uploaded the food webs of this project (mydata_foodweb_set_new_method.Rdata) in case readers want to reproduce the results in the paper. Readers who want to generate their own food webs and reproduce their own results should save the generated food webs or use set.seed().
- Define a function to generate N food-webs for a given motif (again, old.motif.id)
- The input argument motif.id of the function is the id of the module (old.motif.id).
- The input N of the function is the number of communities generated for each module.
generation_stable_foodwebs_func<-function(motif.id, N){
k<-0
mydata_foodweb<-list()
while(k<N){
A_matrix<-A.matrix.f(motif.id) # interaction coefficient matrix
R<-assign.R.func(motif.id) # intrinsic growth/decay rate
# the equilibrium biomass/density
Neq <- tryCatch(solve(A_matrix,-R),
error=function(err) c(-1,-1,-1,-1))
J <- A_matrix*Neq # Jacobian matrix
L <- unlist(eigen(J)$values) # eigenvalues of the Jacobian matrix
maxreL <- max(Re(L)) # max real part of Jacobian matrix
ct <- all(Neq>0) & maxreL < (-0.005) & maxreL > - 0.1 # check for local stability and feasibility
ct <- as.numeric(ct)
if(ct==1){
k<-k+1
community<-list(J,R,A_matrix,Neq,maxreL)
names(community)<-c("jacobian_matrix","R","A_matrix","Neq",'maxreL')
mydata_foodweb[[k]]<-community
}else{
k<-k
}
}
return(mydata_foodweb)
}
- generate the food webs for this project
motifs<-1:14
mydata_foodweb_set<-list()
for(m in motifs){
mydata_foodweb<-generation_stable_foodwebs_func(m,100)
mydata_foodweb_set[[m]]<-mydata_foodweb
}
save(mydata_foodweb_set,file='mydata_foodweb_set_new_method.Rdata')
The finally constructed food webs
The distribution of the real part of the maximum eigenvalue of the food-webs of each module
Note that I matched the old.motif.id with the new.motif.id, so the module ID here is the new module ID, i.e. the module ID in the paper.
# prepare the data for plotting
load(file='mydata_foodweb_set_new_method.Rdata')
df <- expand.grid(1:14, 1:100, NA, NA, NA, NA, NA)
df <- as.data.frame(df)
colnames(df) <- c('old.motif', 'foodweb.id',
'maxRel', 'neq1', 'neq2', 'neq3', 'neq4')
for(i in 1:nrow(df)){
m <- df[i, 1]; n <- df[i, 2]
mydata_foodweb <- mydata_foodweb_set[[m]][[n]]
df[i, 3:7] <- c(mydata_foodweb$maxreL, mydata_foodweb$Neq)
}
df2 <- data.frame(
new.motif = c(1:14),
old.motif = c(1,14,2:13)
)
df <- left_join(df, df2, by='old.motif')
p.eigen.distribution <- ggplot(df, aes(maxRel)) +
geom_density(fill = 'coral',color='coral') +
facet_wrap(~new.motif, scales = "free",ncol = 2) +
theme_bw() +
theme(legend.position = 'none',
strip.text = element_text(size=12),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
axis.title.x = element_text(colour="black",size=14),
axis.title.y = element_text(colour="black",size=14),
axis.text.x = element_text(colour="black",size=9),
axis.text.y = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5)) +
ylab('Density')+
xlab('The largest real part of the eigenvalue')
p.eigen.distribution
The equilibrium density of each species of the food-webs of each module
This figure shows that the species densities of the food-webs used in this project follow a pyramid structure.
load(file='mydata_foodweb_set_new_method.Rdata')
df <- expand.grid(1:14, 1:100, NA, NA, NA, NA)
df <- as.data.frame(df)
colnames(df) <- c('old.motif', 'foodweb.id','neq1', 'neq2', 'neq3', 'neq4')
for(i in 1:nrow(df)){
m <- df[i, 1]; n <- df[i, 2]
mydata_foodweb <- mydata_foodweb_set[[m]][[n]]
df[i, 3:6] <- c(mydata_foodweb$Neq)
}
df2 <- data.frame(
new.motif = c(1:14),
old.motif = c(1,14,2:13)
)
df <- left_join(df, df2, by='old.motif')
df <- select(df, -old.motif)
colnames(df) <- c('foodweb.id',1:4, 'new.motif')
df2 <- melt(df, id.vars=c("new.motif", "foodweb.id"))
df2 <- mutate(df2, variable = as.numeric(variable))
p.equilibrium.density<-ggplot(df2,
aes(x=as.factor(variable), y=value)) +
geom_boxplot(color='coral') +
scale_color_gradient(low = "white", high = "red")+
facet_wrap(~new.motif,ncol = 4, scales = "free_y") +
theme_bw() +
theme(legend.position = 'none',
strip.text = element_text(size=12),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
axis.title.x = element_text(colour="black",size=14),
axis.title.y = element_text(colour="black",size=14),
axis.text.x = element_text(colour="black",size=9),
axis.text.y = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5))+
ylab('Equilibrium density') + xlab('Species ID')
p.equilibrium.density
The intrinsic decay rate of non-basal species
load(file='mydata_foodweb_set_new_method.Rdata')
#graph data
df <-
expand.grid(1:14, 1:100, NA, NA, NA, NA) %>%
as.data.frame() %>%
magrittr::set_names(c('old.motif', 'foodweb.id','r1', 'r2', 'r3', 'r4'))
#assign values
for(i in 1:nrow(df)){
m <- df[i, 1]; n <- df[i, 2]
mydata_foodweb <- mydata_foodweb_set[[m]][[n]]
df[i, 3:6] <- c(mydata_foodweb$R)
}
#match to the new module id in the paper
df2 <- data.frame(
new.motif = c(1:14),
old.motif = c(1,14,2:13)
)
#join df and df2
#filter out the R of the basal species as it is always 1
df <-
left_join(df, df2, by='old.motif') %>%
dplyr::select(-old.motif) %>%
magrittr::set_names(c('foodweb.id',1:4, 'new.motif')) %>%
melt(id.vars=c("new.motif", "foodweb.id")) %>%
mutate(variable = as.numeric(variable)) %>%
filter(value<0)
# graphics
p.intrinsic.decay.rate <- ggplot(df, aes(x=as.factor(variable), y=value)) +
geom_boxplot(color='coral') +
facet_wrap(~new.motif,ncol = 4) +
theme_bw() +
theme(legend.position = 'none',
strip.text = element_text(size=12),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
axis.title = element_text(colour="black",size=14),
axis.text = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5))+
ylab('Intrinsic decay rate') + xlab('Species ID')
p.intrinsic.decay.rate
Environmental stochasticity
Scenarios of environmental stochasticity
- A set of stochasticity scenarios was generated by controlling the autocorrelation coefficient of the stochastic environment and the correlation between species’ responses.
- In this project, the autocorrelation coefficients were set to -0.8, -0.4, 0, 0.4, and 0.8, covering noise colours from blue to red.
- The correlations between species’ responses were set to 0.2, 0.5, and 0.8, representing weak, intermediate, and strong correlations.
- For each combination of the two factors above, 50 replicates were made. This generated 750 scenarios of environmental stochasticity.
- For each of these 750 scenarios, spectral mimicry was performed to generate a new stochasticity whose time-series distribution was forced to follow a normal distribution.
- Therefore, a total of 1500 scenarios of environmental stochasticity were generated.
interval_choice <- 1
k_choice <- c(-0.8,-0.4,0,0.4,0.8)
p_choice <- c(0.2,0.5,0.8)
variance_scale_choice <- 0.05
noise_rep_ID <- 1:50
all_noise_combinations<-expand.grid(noise_rep_ID,interval_choice,
k_choice, p_choice, variance_scale_choice)
colnames(all_noise_combinations)<-c("noise_rep_ID","interval","k","p","variance_scale")
Generation of environmental stochasticity with the autoregressive method
#transform all_noise_combinations to a list for the application of lapply function
all_noise_combinations_list<-list()
for(i in 1:nrow(all_noise_combinations)){
parms<-as.numeric(all_noise_combinations[i,])
all_noise_combinations_list[[i]]<-parms
}
#define the function to generate time series of environmental stochasticity
noise_generation_func<-function(parms){
times_range<-c(0,1000) # the time range of the noise series
noise_rep_ID<-parms[1]
set.seed(noise_rep_ID) # this keeps the standard normal draws consistent for the same noise_rep_ID
interval<-parms[2]
k<-parms[3]
p<-parms[4]
variance_scale<-parms[5]
noise_number<-(times_range[2]-times_range[1])/interval+1
noise_species_1<-rep(0,noise_number);
noise_species_2<-rep(0,noise_number);
noise_species_3<-rep(0,noise_number);
noise_species_4<-rep(0,noise_number);
v1 <- (variance_scale)^0.5 ;
v2 <- (variance_scale)^0.5 ;
v3 <- (variance_scale)^0.5 ;
v4 <- (variance_scale)^0.5 ;
foai <- rnorm(noise_number) ; # shared environmental driver
w1 <- rnorm(noise_number) ;   # independent species-specific drivers
w2 <- rnorm(noise_number) ;
w3 <- rnorm(noise_number) ;
w4 <- rnorm(noise_number) ;
beita <- ( (1-abs(p)) / abs(p) )^0.5 # mixing weight giving correlation p between species' responses
for(t in 1:(noise_number-1)){
noise_species_1[t+1] <- k*noise_species_1[t] + v1*(1-k^2)^0.5*(foai[t]+beita*w1[t])/(1+beita^2)^0.5
noise_species_2[t+1] <- k*noise_species_2[t] + v2*(1-k^2)^0.5*(foai[t]+beita*w2[t])/(1+beita^2)^0.5
noise_species_3[t+1] <- k*noise_species_3[t] + v3*(1-k^2)^0.5*(foai[t]+beita*w3[t])/(1+beita^2)^0.5
noise_species_4[t+1] <- k*noise_species_4[t] + v4*(1-k^2)^0.5*(foai[t]+beita*w4[t])/(1+beita^2)^0.5
}
noise<-cbind(noise_species_1,noise_species_2,noise_species_3,noise_species_4)
return(noise)
}
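The construction above mixes the shared driver (foai) with species-specific drivers (w1-w4) using the weight beita, so that each series has lag-1 autocorrelation k and pairwise correlation p. A numerical check in Python (two series suffice to see both targets; series length is chosen only to make the estimates precise):

```python
import numpy as np

rng = np.random.default_rng(42)
n, k, p, var = 100_000, 0.4, 0.5, 0.05
v = np.sqrt(var)
beta = np.sqrt((1 - abs(p)) / abs(p))     # 'beita' in the R code
phi = rng.standard_normal(n)              # 'foai': shared driver
w1, w2 = rng.standard_normal(n), rng.standard_normal(n)
x1, x2 = np.zeros(n), np.zeros(n)
for t in range(n - 1):
    e1 = (phi[t] + beta * w1[t]) / np.sqrt(1 + beta**2)  # unit-variance innovation
    e2 = (phi[t] + beta * w2[t]) / np.sqrt(1 + beta**2)
    x1[t + 1] = k * x1[t] + v * np.sqrt(1 - k**2) * e1
    x2[t + 1] = k * x2[t] + v * np.sqrt(1 - k**2) * e2
autocor = np.corrcoef(x1[:-1], x1[1:])[0, 1]  # should be close to k
crosscor = np.corrcoef(x1, x2)[0, 1]          # should be close to p
print(round(autocor, 1), round(crosscor, 1))
```

The innovations have correlation \(1/(1+\beta^2)=p\), and an AR(1) process inherits that correlation at the process level, so both design targets are met.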
#generate the noise dataset
ALL_NOISE_DATASET<-lapply(all_noise_combinations_list, noise_generation_func)
Spectral mimicry
- The environmental stochasticity generated with the AR method may deviate from a normal distribution.
- Spectral mimicry was conducted to control the distribution shape of the environmental stochasticity.
- Details on spectral mimicry can be found in Cohen et al. 1999 (Spectral mimicry: A method of synthesizing matching time series with different Fourier spectra) and Fowler & Ruokolainen 2013 (Confounding Environmental Colour and Distribution Shape Leads to Underestimation of Population Extinction Risk).
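The core of spectral mimicry is a rank re-ordering: values drawn from the target (normal) distribution are re-arranged to follow the rank pattern, and hence the temporal structure, of the original series. A minimal Python sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(size=1001)            # stand-in for an AR series with a skewed marginal
y = rng.normal(0, np.sqrt(0.05), 1001)    # target values with the desired normal marginal
ranks = x.argsort().argsort()             # 0-based rank of each element of x
z = np.sort(y)[ranks]                     # y's values re-ordered to x's temporal rank pattern
# z keeps exactly y's marginal distribution but x's rank ordering in time
same_marginal = np.array_equal(np.sort(z), np.sort(y))
same_ordering = np.array_equal(z.argsort(), x.argsort())
print(same_marginal, same_ordering)
```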
ALL_NOISE_DATASET_spectral_mimicry <- list()
for(i in 1:750){
noise <- ALL_NOISE_DATASET[[i]]
noise.character <- all_noise_combinations_list[[i]]
rep.id <- noise.character[1]
set.seed(rep.id) # seed consistency for the same replicate id
k <- 0
x1 <- noise[, 1]; x1.rank <- rank(x1);
x2 <- noise[, 2]; x2.rank <- rank(x2);
x3 <- noise[, 3]; x3.rank <- rank(x3);
x4 <- noise[, 4]; x4.rank <- rank(x4);
while(k < 1){
y1 <- rnorm(1001, mean = 0, sd = 0.05^0.5);
y2 <- rnorm(1001, mean = 0, sd = 0.05^0.5);
y3 <- rnorm(1001, mean = 0, sd = 0.05^0.5);
y4 <- rnorm(1001, mean = 0, sd = 0.05^0.5);
test.on.y1 <- jarque.bera.test(y1)$p.value
test.on.y2 <- jarque.bera.test(y2)$p.value
test.on.y3 <- jarque.bera.test(y3)$p.value
test.on.y4 <- jarque.bera.test(y4)$p.value
k <- as.numeric(test.on.y1>0.05 & test.on.y2>0.05 & test.on.y3>0.05 & test.on.y4>0.05)
}
y1 <- sort(y1)
y2 <- sort(y2)
y3 <- sort(y3)
y4 <- sort(y4)
z1 <- y1[x1.rank]
z2 <- y2[x2.rank]
z3 <- y3[x3.rank]
z4 <- y4[x4.rank]
noise <- cbind(z1,z2,z3,z4)
colnames(noise) <- c('noise_species_1', 'noise_species_2', 'noise_species_3', 'noise_species_4')
ALL_NOISE_DATASET_spectral_mimicry[[i]] <- noise
rm(noise)
}
# put the original noise and the new noise together
for(i in 1:750){
m <- i + 750
ALL_NOISE_DATASET[[m]] <- ALL_NOISE_DATASET_spectral_mimicry[[i]]
}
# save the new noise dataset
save(ALL_NOISE_DATASET,file="mydata_ALL_NOISE_DATASET.Rdata")
Compare the realized autocorrelation coefficients of the generated environmental stochasticity with the designed ones
load("mydata_ALL_NOISE_DATASET.Rdata")
all_noise_combinations <- rbind(all_noise_combinations, all_noise_combinations)
# add an indicator of whether the noise is original or mimicry
all_noise_combinations <- mutate(all_noise_combinations,
spectral_mimicry = rep(c('nonmimicry','mimicry'), each = 750))
# add empty columns for the realized autocorrelation coefficients and the correlation
all_noise_combinations <- mutate(all_noise_combinations,
auto.cor.noise.1 = numeric(1500),
auto.cor.noise.2 = numeric(1500),
auto.cor.noise.3 = numeric(1500),
auto.cor.noise.4 = numeric(1500),
mean.correlation = numeric(1500))
for(i in 1:1500){
noise <- ALL_NOISE_DATASET[[i]]
all_noise_combinations$auto.cor.noise.1[i] <- acf(noise[,1], plot = FALSE)$acf[2]
all_noise_combinations$auto.cor.noise.2[i] <- acf(noise[,2], plot = FALSE)$acf[2]
all_noise_combinations$auto.cor.noise.3[i] <- acf(noise[,3], plot = FALSE)$acf[2]
all_noise_combinations$auto.cor.noise.4[i] <- acf(noise[,4], plot = FALSE)$acf[2]
cor.matrix <- cor(noise)
all_noise_combinations$mean.correlation[i] <- mean(cor.matrix[upper.tri(cor.matrix)])
}
save(all_noise_combinations, file='all_noise_combinations.Rdata')
load(file='all_noise_combinations.Rdata')
df1 <- select(all_noise_combinations, -interval, -variance_scale, -mean.correlation, -p)
df1 <- mutate(df1, k=k+rep(c(-0.025,0.025),each=750)) # avoid overlap between the mimicry groups
df1 <- melt(df1, id.vars = c("noise_rep_ID", "k", "spectral_mimicry")) # to long format
to_string <- as_labeller(c(`auto.cor.noise.1` = "noise on species 1",
`auto.cor.noise.2` = "noise on species 2",
`auto.cor.noise.3` = "noise on species 3",
`auto.cor.noise.4` = "noise on species 4")) # change the title of each subplot
ggplot(data=df1, aes(x=k,y=value,color=spectral_mimicry)) +
geom_point(size=0.5) +
facet_wrap(~variable, labeller = to_string ) +
theme_bw() +
scale_x_continuous(breaks = c(-0.8, -0.4, 0, 0.4, 0.8)) +
geom_abline(slope = 1, linetype = 'dotted') +
xlab('Designed autocorrelation coefficients') +
ylab('Realized autocorrelation coefficients') +
theme(legend.title=element_blank(), legend.position = 'right',
legend.text = element_text(size = 14),
strip.text.x = element_text(size = 14),
axis.title.x = element_text(colour="black",size=16),
axis.title.y = element_text(colour="black",size=16),
axis.text.x = element_text(colour="black",size=12),
axis.text.y = element_text(colour="black",size=12)) +
guides(colour = guide_legend(override.aes = list(size=6)))
Compare the realized correlation between species’ responses with the designed ones
df2 <- select(all_noise_combinations, -interval, -variance_scale, -k,
-auto.cor.noise.1, -auto.cor.noise.2, -auto.cor.noise.3, -auto.cor.noise.4)
df2 <- mutate(df2, p=p+rep(c(-0.01,0.01),each=750)) # avoid overlap between the mimicry groups
ggplot(data=df2, aes(x=p,y=mean.correlation,color=spectral_mimicry)) +
geom_point(size=1) +
theme_bw() +
scale_x_continuous(breaks = c(0.2, 0.5, 0.8)) +
geom_abline(slope = 1, linetype = 'dotted') +
xlab('Designed correlation coefficients') +
ylab('Realized correlation coefficients') +
theme(legend.title=element_blank(), legend.position = 'right',
legend.text = element_text(size = 13),
axis.title.x = element_text(colour="black",size=13),
axis.title.y = element_text(colour="black",size=13),
axis.text.x = element_text(colour="black",size=11),
axis.text.y = element_text(colour="black",size=11)) +
guides(colour = guide_legend(override.aes = list(size=6)))
Simulation of food-web dynamics
Models of food-web dynamics
The dynamics of our simple modules are described by the general Lotka-Volterra system (Pimm & Lawton 1977, 1978; Emmerson & Yearsley 2004): \[\frac{dN_i}{dt}=N_{i}(t)(r_i+\sum_{j=1}^4a_{ij}N_j(t)+\epsilon_i(t))\]
where \(i\) and \(j\) index the species in the food-web, \(N_i\) is the population density of species \(i\), \(r_i\) is the intrinsic growth/decay rate (positive for basal species; otherwise negative), \(a_{ij}\) is the interaction coefficient that describes the per capita effect of the \(j^{th}\) species on the growth/decay rate of the \(i^{th}\) species (positive if it enhances population growth; negative if it causes decreases in density), and \(\epsilon_i(t)\) is the species-specific response to environmental stochasticity.
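A minimal Euler sketch of this model in Python (the project itself uses deSolve's ODE solvers; the two-species coefficients here are hypothetical, and \(\epsilon_i(t)\) is set to zero so that a community started away from equilibrium relaxes back to it):

```python
import numpy as np

# Euler sketch of dN_i/dt = N_i (r_i + sum_j a_ij N_j + eps_i); hypothetical
# two-species coefficients, not taken from the 14 modules.
A = np.array([[-1.0, -0.3],
              [ 0.06, -0.1]])
r = np.array([1.0, -0.05])
Neq = np.linalg.solve(A, -r)       # equilibrium densities
N = Neq * np.array([1.0, 0.5])     # perturb the consumer by 50%, as in the project
dt = 0.01
for _ in range(100_000):           # integrate to t = 1000
    eps = np.zeros(2)              # replace with noise draws for a stochastic run
    N = N + dt * N * (r + A @ N + eps)
print(np.allclose(N, Neq, rtol=1e-3))
```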
The perturbation and conduction of simulation
- For each food-web, we conducted a perturbation by reducing the density of the top species by 50%.
- We simulated the dynamics of the food-webs (with and without perturbation) in each of the 1500 sets of stochastic environment.
- We simulated 1000 steps of the food-web dynamics with a step length of 1.
- The simulation requires very heavy computation. The task was finished on the TCHPC cluster at Trinity College Dublin. The whole simulation took about 1 CPU month and generated about 65 GB of simulation data (the original simulation data are not provided because of their size).
Define a function to simulate food-web dynamics after perturbation in noisy environment
- In the defined function, argument m is the id of the module (old.motif.id)
- i is the id of the food-web generated for module m
- j is the id of the environmental stochasticity time series
sta.simu.pert<-function(m,i,j){
factors<-all_noise_combinations[j,2:5]
interval<-as.numeric(factors[1])
times<-seq(0,1000,interval)
noise_species_1<-ALL_NOISE_DATASET[[j]][,1]
noise_species_2<-ALL_NOISE_DATASET[[j]][,2]
noise_species_3<-ALL_NOISE_DATASET[[j]][,3]
noise_species_4<-ALL_NOISE_DATASET[[j]][,4]
noise_times_1<-as.data.frame(list(times = times, import1= rep(0, length(times))))
noise_times_1$import1<- noise_species_1 #noise values
input1<-approxfun(noise_times_1, method="linear",rule=2)
noise_times_2<-as.data.frame(list(times = times, import2= rep(0, length(times))))
noise_times_2$import2<- noise_species_2 #noise values
input2<-approxfun(noise_times_2, method="linear",rule=2)
noise_times_3<-as.data.frame(list(times = times, import3= rep(0, length(times))))
noise_times_3$import3<- noise_species_3 #noise values
input3<-approxfun(noise_times_3, method="linear",rule=2)
noise_times_4<-as.data.frame(list(times = times, import4= rep(0, length(times))))
noise_times_4$import4<- noise_species_4 #noise values
input4<-approxfun(noise_times_4, method="linear",rule=2)
community<-mydata_foodweb_set[[m]][[i]]
r <-community$R
a <-community$A_matrix
parms <- list(r, a)
Neq<-community$Neq
initialN<-c(1,1,1,0.5)*Neq
webpres<- function(t, n, parms){
import1 <- input1(t);
import2 <- input2(t);
import3 <- input3(t);
import4 <- input4(t);
r <- parms[[1]]
a <- parms[[2]]
e <- c(import1,import2,import3,import4)
dn.dt <- n * (r + (a %*% n) + e)
return(list(c(dn.dt))) # deSolve expects the derivatives wrapped in a list
}
mydata_simulation<- ode(y = initialN, times = times, func=webpres, parms = parms)
mydata_simulation_new<-signif(mydata_simulation,4)
mydata_simulation_new<-mydata_simulation_new[,2:5]
save(mydata_simulation_new,file=paste('foodweb_pert',m,i,j,'.Rdata',sep=' '))
}
Define a function to simulate food-web dynamics in noisy environment - no perturbation conducted
sta.simu.nonpert<-function(m,i,j){
factors<-all_noise_combinations[j,2:5]
interval<-as.numeric(factors[1])
times<-seq(0,1000,interval)
noise_species_1<-ALL_NOISE_DATASET[[j]][,1]
noise_species_2<-ALL_NOISE_DATASET[[j]][,2]
noise_species_3<-ALL_NOISE_DATASET[[j]][,3]
noise_species_4<-ALL_NOISE_DATASET[[j]][,4]
noise_times_1<-as.data.frame(list(times = times, import1= rep(0, length(times))))
noise_times_1$import1<- noise_species_1 #noise values
input1<-approxfun(noise_times_1, method="linear",rule=2)
noise_times_2<-as.data.frame(list(times = times, import2= rep(0, length(times))))
noise_times_2$import2<- noise_species_2 #noise values
input2<-approxfun(noise_times_2, method="linear",rule=2)
noise_times_3<-as.data.frame(list(times = times, import3= rep(0, length(times))))
noise_times_3$import3<- noise_species_3 #noise values
input3<-approxfun(noise_times_3, method="linear",rule=2)
noise_times_4<-as.data.frame(list(times = times, import4= rep(0, length(times))))
noise_times_4$import4<- noise_species_4 #noise values
input4<-approxfun(noise_times_4, method="linear",rule=2)
community<-mydata_foodweb_set[[m]][[i]]
r <-community$R
a <-community$A_matrix
parms <- list(r, a)
Neq<-community$Neq
initialN<-c(1,1,1,1)*Neq
webpres<- function(t, n, parms){
import1 <- input1(t);
import2 <- input2(t);
import3 <- input3(t);
import4 <- input4(t);
r <- parms[[1]]
a <- parms[[2]]
e <- c(import1,import2,import3,import4)
dn.dt <- n * (r + (a %*% n) + e)
return(list(c(dn.dt))) # deSolve expects the derivatives wrapped in a list
}
mydata_simulation<- ode(y = initialN, times = times, func=webpres, parms = parms)
mydata_simulation_new<-signif(mydata_simulation,4)
mydata_simulation_new<-mydata_simulation_new[,2:5]
save(mydata_simulation_new,file=paste('foodweb_nonpert',m,i,j,'.Rdata',sep=' '))
}
Conduction of the simulation
for(m in 1:14){
empty.list <- list()
for(j in 1:1500){
empty.list[[j]]<-NA
}
for(i in 1:100){
mydata <- empty.list
for(j in 1:1500){
evalWithTimeout(sta.simu.pert(m,i,j),timeout=60, onTimeout="silent"); # evalWithTimeout() is from package R.utils
#give up a simulation if it takes longer than 1 minute
try(
{
load(file=paste('foodweb_pert',m,i,j,'.Rdata',sep=' '))
mydata[[j]]<-mydata_simulation_new
file.remove(paste('foodweb_pert',m,i,j,'.Rdata',sep=' '))
}
)
}
save(mydata, file=paste('myfoodweb_pert',m,i,'.Rdata',sep=' '))
}
empty.list <- list()
for(j in 1:1500){
empty.list[[j]]<-NA
}
for(i in 1:100){
mydata <- empty.list
for(j in 1:1500){
evalWithTimeout(sta.simu.nonpert(m,i,j),timeout=60, onTimeout="silent");
try(
{
load(file=paste('foodweb_nonpert',m,i,j,'.Rdata',sep=' '))
mydata[[j]]<-mydata_simulation_new
file.remove(paste('foodweb_nonpert',m,i,j,'.Rdata',sep=' '))
}
)
}
save(mydata, file=paste('myfoodweb_nonpert',m,i,'.Rdata',sep=' '))
}
}
Stability quantification
Quantification of recovery time
Define a function to quantify the food-web level recovery time
# take a difference smaller than 0.01 as the recovery criterion
# the smaller value should be kept for the next 50 steps if it first occurs before step 950
# the smaller value should be kept until the simulation ends (step 1001) if it first occurs between steps 950 and 980
# if the smaller value first occurs after step 980, the recovery time is assigned NA
# this function also allows excluding species from the quantification and, optionally, scaling the biomass of each population to its equilibrium
recovery.time.f<-function(dat1,dat2,species.to.exclude=c(),biomass.neq.scale=FALSE){
if(any(is.na(dat1))|any(is.na(dat2))){
recovery_time <- NA ## if there is any NA value in dat1 or dat2, the recovery time is NA
}else{
nrow1<- nrow(dat1)
nrow2<- nrow(dat2)
nrow <- min(nrow1,nrow2)
dat1 <- dat1[1:nrow, ]
dat2 <- dat2[1:nrow, ] ## make dat1 and dat2 of same row number
if(biomass.neq.scale==TRUE){
neq1 <- dat1[1, ] * c(1,1,1,2) ## the perturbed run starts the top species at half its equilibrium, hence the factor 2
neq2 <- dat2[1, ]
dat1 <- sweep(dat1,2,neq1, "/")
dat2 <- sweep(dat2,2,neq2, "/")
}
NewColumnID<-setdiff(1:4,species.to.exclude)
dat1 <- dat1[,NewColumnID]
dat2 <- dat2[,NewColumnID]
dat <- dat1-dat2
difv <- apply(dat, 1, function(x)all(abs(x)<=0.01)) ## which row the dat1 and dat2 difference is smaller than 0.01
difv.true.id <- which(difv)
recovery_time <- NA
for( k in difv.true.id){
if(k <= 950){
check <- sum(difv[k:(k+49)])/50 ## ratio of trues in the next 50 steps
}else
if(k <= 980){
check <- sum(difv[k:1001])/length(k:1001) ## ratio of trues in the next steps until the final step
}else{
check <- 0
}
if(check>0.99){ # the ratio of TRUEs in the following steps is larger than 99%
recovery_time <- k
break
}
}
return(recovery_time)
}
}
Quantify the food-web level recovery time
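The recovery criterion of recovery.time.f can be sketched on a toy series in Python (the exponentially decaying difference series below is made up for illustration; thresholds follow the document):

```python
import numpy as np

t = np.arange(1001)
diff = 0.5 * np.exp(-0.02 * t)     # hypothetical |perturbed - unperturbed| difference
below = diff <= 0.01               # the 0.01 recovery threshold
recovery_time = None               # stays None (NA) if the criterion is never met
for k in np.where(below)[0]:
    if k <= 950:
        window = below[k:k + 50]   # must hold over the next 50 steps
    elif k <= 980:
        window = below[k:1001]     # must hold until the simulation ends
    else:
        break                      # too late: recovery time stays NA
    if window.mean() > 0.99:
        recovery_time = int(k)
        break
print(recovery_time)
```

With the decay rate chosen here, the difference first drops below 0.01 at step 196 and stays there, so 196 is returned.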
# I saved the recovery data into small files and compiled them together as a data.frame
ee<-list()
for(j in 1:1500){
ee[[j]]<-NA
}
for(m in 1:14){
for(i in 1:100){
load(file=paste('myfoodweb_pert',m,i,'.Rdata',sep=' '))
mydata_pert<- mydata; rm(mydata)
load(file=paste('myfoodweb_nonpert',m,i,'.Rdata',sep=' '))
mydata_nonpert<- mydata; rm(mydata)
recovery.list <- ee
for(j in 1:1500){
dat1 <- mydata_pert[[j]]
dat2 <- mydata_nonpert[[j]]
recovery.time <- tryCatch( recovery.time.f(dat1,dat2,species.to.exclude=c(),biomass.neq.scale=FALSE), error=function(err) NA)
recovery.list[[j]] <- recovery.time
}
save(recovery.list, file=paste('mydata.recovery.time.list.0.01.sub.sub.4spp.nonscaled',m,i,'.Rdata',sep=' ') )
}
}
# compile the small list files of the food-web level recovery time
ee <- list(); for(j in 1 : 1500) ee[[j]] <- NA
ff <- list(); for(i in 1 : 100) ff[[i]] <- ee
gg <- list(); for(m in 1 : 14) gg[[m]] <- ff
mydata.recovery.0.01.nonscaled.set <- gg
for(m in 1:14){
for(i in 1:100){
load(file=paste('mydata.recovery.time.list.0.01.sub.sub.4spp.nonscaled',m,i,'.Rdata',sep=' '))
mydata.recovery.0.01.nonscaled.set[[m]][[i]] <- recovery.list; rm(recovery.list)
}
}
save(mydata.recovery.0.01.nonscaled.set,file='mydata.recovery.0.01.nonscaled.set.Rdata') # this file is loaded again below when compiling the stability components
Quantification of resistance
The food-web level resistance is quantified as the maximum Euclidean distance between the perturbed and the unperturbed food web.
Define a function to quantify the food-web level resistance
- I defined a function that can use either Euclidean distance or Bray-Curtis distance.
- In the paper, I only presented the result based on Euclidean distance (the two distances gave the same qualitative result).
resistance.f<-function(dat1=dat1,dat2=dat2,species.to.exclude=c(),biomass.neq.scale=FALSE,distance.method='euclidean'){
if(biomass.neq.scale==TRUE){
neq1 <- dat1[1, ] * c(1,1,1,2)
neq2 <- dat2[1, ]
dat1 <- sweep(dat1,2,neq1, "/")
dat2 <- sweep(dat2,2,neq2, "/")
}
NewColumnID<-setdiff(1:4,species.to.exclude)
dat1 <- dat1[,NewColumnID]
dat2 <- dat2[,NewColumnID]
if(length(NewColumnID)==3){
if(distance.method=='euclidean'){
dis = sapply(1:nrow(dat1),function(u)sqrt(sum((dat1[u,]-dat2[u,])^2)))
}else{
dis =(abs(dat1[,1]-dat2[,1])+abs(dat1[,2]-dat2[,2])+abs(dat1[,3]-dat2[,3]))/(rowSums(dat1[,1:3])+rowSums(dat2[,1:3]))
}
}else{
if(distance.method=='euclidean'){
dis = sapply(1:nrow(dat1),function(u)sqrt(sum((dat1[u,]-dat2[u,])^2)))
}else{
dis =(abs(dat1[,1]-dat2[,1])+abs(dat1[,2]-dat2[,2])+abs(dat1[,3]-dat2[,3])+abs(dat1[,4]-dat2[,4]))/(rowSums(dat1[,1:4])+rowSums(dat2[,1:4]))
}
}
resistance.instability <- max(dis)
resistance.instability.moment <- which.max(dis)
result <- c(resistance.instability, resistance.instability.moment)
setNames(result,c('resistance.instability','resistance.instability.moment'))
}
Quantify the food-web level resistance
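A minimal, self-contained sketch of what resistance.f computes, on hypothetical toy trajectories (two species, four time steps): resistance is the maximum Euclidean distance between the perturbed and unperturbed state vectors, and the accompanying index records the time step at which that maximum occurs.

```r
dat1 <- matrix(c(1.0, 1.2, 1.1, 1.0,   # perturbed, species 1
                 2.0, 2.5, 2.2, 2.0),  # perturbed, species 2
               ncol = 2)
dat2 <- matrix(c(1.0, 1.0, 1.0, 1.0,   # unperturbed, species 1
                 2.0, 2.0, 2.0, 2.0),  # unperturbed, species 2
               ncol = 2)
# Euclidean distance between the two communities at each time step
dis <- sapply(1:nrow(dat1), function(u) sqrt(sum((dat1[u, ] - dat2[u, ])^2)))
max(dis)       # resistance.instability: the largest displacement
which.max(dis) # resistance.instability.moment: step 2 in this toy case
```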
I saved the resistance data into small files and compiled them together as a data.frame.
aa<-list()
for(j in 1:1500){
aa[[j]]<-NA
}
resistance.list.f<-function(m,i){
load(file=paste('myfoodweb_pert',m,i,'.Rdata',sep=' '))
mydata_pert<- mydata; rm(mydata)
load(file=paste('myfoodweb_nonpert',m,i,'.Rdata',sep=' '))
mydata_nonpert<- mydata; rm(mydata)
resistance.4spp.nonscaled.sub.sub.list<-aa
load(file=paste('mydata.recovery.time.list.0.01.sub.sub.4spp.nonscaled',m,i,'.Rdata',sep=' '))
for(j in 1:1500){
dat1 <- mydata_pert[[j]]
dat2 <- mydata_nonpert[[j]]
t.end <- recovery.list[[j]]
resistance.euclidean <- tryCatch(resistance.f(dat1=dat1[1:t.end,],dat2=dat2[1:t.end,],species.to.exclude=c(),biomass.neq.scale=FALSE,distance.method='euclidean'), error=function(err) c(NA,NA))
result <- resistance.euclidean
names(result) <- c('rs.euclidean.value','rs.euclidean.time')
resistance.4spp.nonscaled.sub.sub.list[[j]] <- result
}
save(resistance.4spp.nonscaled.sub.sub.list,file=paste('mydata.resistance.4spp.nonscaled.sub.sub.list',m,i,'.Rdata',sep=' '))
}
for(m in 1:14){
for(i in 1:100){
try(
resistance.list.f(m,i)
)
}
}
Compile the small list files of the food-web level resistance
mydata.resistance.4spp.nonscaled.set <- list()
for(m in 1:14){
subset <- list()
for(i in 1:100){
load(file=paste('mydata.resistance.4spp.nonscaled.sub.sub.list',m,i,'.Rdata',sep=' '))
subset[[i]] <- resistance.4spp.nonscaled.sub.sub.list
rm(resistance.4spp.nonscaled.sub.sub.list)
}
mydata.resistance.4spp.nonscaled.set[[m]] <- subset
}
save(mydata.resistance.4spp.nonscaled.set,file='mydata.resistance.4spp.nonscaled.set.Rdata')
Quantification of variability
The food-web level variability is quantified as the CV of the total density of the unperturbed food web over the simulation time.
Define a function to quantify the food-web level variability
# variability type:
# type I: community: CV of the total population biomass # this is the variability we used in the paper
# type II: population: mean of the CVs of the single populations
# type III: population.biomass.weighted: sum of the single-population CVs weighted by the mean biomass of each population
# if detrending.method != 'None', the CV is based on the sd of the residuals
# detrending.method options: None, linear, loess
variability.f<-function(dat,variability.type='community',species.to.exclude=c(),detrending.method='None',biomass.neq.scale=FALSE,neq,degree=2,span=0.05){
times <- 1:nrow(dat)
biomass <- dat
NewColumnID<-setdiff(1:4,species.to.exclude)
biomass <- biomass[, NewColumnID]
if(biomass.neq.scale==TRUE){
biomass <- sweep(biomass,2,neq, "/")
}else{
biomass <- biomass
}
if(detrending.method=='None'){
if(variability.type=='community'){
total.N <- rowSums(biomass)
result <- sd(total.N)/mean(total.N)
}else
if(variability.type=='population'){
sd_N<-apply(biomass,2,sd)
mean_N<-apply(biomass,2,mean)
cv_N<-sd_N/mean_N
result<-mean(cv_N)
}else
if(variability.type=='population.biomass.weighted'){
sd_N<-apply(biomass,2,sd)
mean_N<-apply(biomass,2,mean)
cv_N<-sd_N/mean_N
weights <- mean_N/mean(rowSums(biomass))
result<- sum(cv_N*weights)
}else{
result<-'no such variability.type'
}
}else{
if(detrending.method=='linear'){
total.biomass.residuals <-lm(rowSums(biomass)~times)$residuals
pop.biomass.residuals <- apply(biomass, 2, function(z)lm(z~times)$residuals)
}else
if(detrending.method=='loess'){
total.biomass.residuals <- loess(rowSums(biomass)~times,degree=degree,span=span)$residuals
pop.biomass.residuals <- apply(biomass, 2, function(z)loess(z~times,degree=degree,span=span)$residuals)
}else{
result<-'no such detrending method'
}
if(variability.type=='community'){
total.N <- rowSums(biomass)
result <- sd(total.biomass.residuals)/mean(total.N)
}else
if(variability.type=='population'){
sd_N<-apply(pop.biomass.residuals,2,sd)
mean_N<-apply(biomass,2,mean)
cv_N<-sd_N/mean_N
result<-mean(cv_N)
}else
if(variability.type=='population.biomass.weighted'){
sd_N<-apply(pop.biomass.residuals,2,sd)
mean_N<-apply(biomass,2,mean)
cv_N<-sd_N/mean_N
weights <- mean_N/mean(rowSums(biomass))
result<- sum(cv_N*weights)
}else{
result<-'no such variability.type'
}
}
return(result)
}
Quantify the food-web level variability
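Before looping over all simulations, a quick hypothetical check of the type I (community) measure defined above: species that fluctuate in perfect compensation sum to a nearly constant total, so the community-level CV is (numerically) zero even though each population varies.

```r
# Two species, four time steps; their fluctuations cancel each other out.
biomass <- matrix(c(1.0, 1.1, 0.9, 1.0,   # species 1
                    2.0, 1.9, 2.1, 2.0),  # species 2
                  ncol = 2)
total.N <- rowSums(biomass)  # 3, 3, 3, 3 (up to floating-point error)
sd(total.N) / mean(total.N)  # community CV: ~0, despite nonzero population CVs
```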
aa<-list()
for(j in 1:1500){
aa[[j]]<-rep(NA,12)
}
varv.f<-function(m,i){
load(file=paste('myfoodweb_nonpert',m,i,'.Rdata',sep=' '))
mydata_nonpert<- mydata; rm(mydata)
variability.list <- aa
for(j in 1:1500){
dat <- mydata_nonpert[[j]]
variability.4spp.nonscaled.community.level<-tryCatch(variability.f(dat,variability.type='community',species.to.exclude=c(),detrending.method='None',biomass.neq.scale=FALSE,neq=dat[1,1:4]) , error=function(err) NA)
result <- variability.4spp.nonscaled.community.level
variability.list[[j]] <- result
}
save(variability.list, file=paste('mydata.variability.list.sub.sub',m,i,'.Rdata',sep=' ') )
}
for(m in 1:14){
for(i in 1:100){
try( varv.f(m,i))
}
}
Compile the small list files of the food-web level variability
mydata.variability.set <- list()
for(m in 1:14){
subset <- list()
for(i in 1:100){
load(file=paste('mydata.variability.list.sub.sub',m,i,'.Rdata',sep=' '))
subset[[i]] <- variability.list
rm(variability.list)
}
mydata.variability.set[[m]] <- subset
}
save(mydata.variability.set,file='mydata.variability.set.Rdata')
Compile stability components together
Define a function for transforming a data list to a data.frame
change.list.to.matrix.f<-function(mylist, ncolumns){
dat0 <- unlist(mylist)
matrix(dat0, byrow=TRUE, ncol=ncolumns)
}
Transform list to data.frame
load(file='mydata.recovery.0.01.nonscaled.set.Rdata')
mydata.recovery.0.01.nonscaled.data.frame = change.list.to.matrix.f(mydata.recovery.0.01.nonscaled.set, ncolumns = 1)
load(file='mydata.resistance.4spp.nonscaled.set.Rdata')
mydata.resistance.4spp.nonscaled.data.frame = change.list.to.matrix.f(mydata.resistance.4spp.nonscaled.set, ncolumns = 2)
load(file='mydata.variability.set.Rdata')
mydata.variability.data.frame = change.list.to.matrix.f(mydata.variability.set, ncolumns = 1)
Compile stability components together
sta.data.matrix<-cbind(
mydata.recovery.0.01.nonscaled.data.frame,
mydata.resistance.4spp.nonscaled.data.frame,
mydata.variability.data.frame
)
mydata_stability <- as.data.frame(sta.data.matrix)
colnames(mydata_stability) <- c(
'recovery.4spp.nonscaled.0.01',
'resistance.4spp.nonscaled.euclidean.value',
'variability.4spp.nonscaled.community.level')
Add information of the noise
## add group, community and noise id
combinations <- expand.grid(1:1500,1:100,1:14)
row.id <- 1:nrow(combinations)
factors.comb <- cbind(row.id,combinations)
colnames(factors.comb) <- c('row.id', 'noise.id', 'community.id', 'old.motif.id')
factors.comb <- as.data.frame(factors.comb)
mydata_stability <- as.data.frame(cbind(factors.comb ,mydata_stability))
# add information of noise
load(file='all_noise_combinations.Rdata')
all_noise_combinations <- mutate(all_noise_combinations, noise.id = 1:1500)
mydata_stability <- left_join(mydata_stability, all_noise_combinations)
Add information of motifs
#the id of the old and new motif id
motif.info <- data.frame(old.motif.id = 1:14,
new.motif.id = c(1, 3:14, 2))
# arrange the order of motif.info by new.motif.id (I copied the data from a previous MS)
motif.info <- arrange(motif.info, new.motif.id)
# add motif characters
motif.info <- mutate(motif.info,
no.trophic.levels = c(4,3,3,2,2,4,3,4,4,4,4,4,3,4),
no.basal.species = c(1,1,2,3,2,1,2,1,1,1,1,1,2,1),
no.omnivor.species.trophic.criteria = c(0,0,0,0,0,1,1,1,1,2,1,2,1,2),
no.omnivor.links.trophic.criteria = c(0,0,0,0,0,1,1,1,1,2,2,2,2,3),
no.omnivor.species.plant.criteria = c(0,0,0,0,0,0,1,1,1,1,1,2,1,2),
no.omnivor.links.plant.criteria = c(0,0,0,0,0,0,1,1,1,1,1,2,2,2),
connectance_prey_predator = c(3,4,3,3,4,4,4,4,4,5,5,5,5,6)/16,
connectance_include_competiton = c(3,4,4,6,5,4,5,4,4,5,5,5,6,6)/16
)
mydata_stability <- left_join(mydata_stability, motif.info)
Add information of food-webs
load(file='mydata_foodweb_set_new_method.Rdata')
foodweb.info <- as.data.frame(expand.grid(1:14, 1:100))
colnames(foodweb.info) <- c('old.motif.id', 'community.id')
maxRel.value <- numeric(1400)
max.Neq <- numeric(1400)
min.Neq <- numeric(1400)
min.abs.R <- numeric(1400)
mean.upper.jacobian <- numeric(1400)
mean.lower.jacobian <- numeric(1400)
mean.diag.jacobian <- numeric(1400)
diag.jacobian.dominant.upper <- numeric(1400)
diag.jacobian.dominant.lower <- numeric(1400)
mean.upper.a.matrix <- numeric(1400)
mean.lower.a.matrix <- numeric(1400)
mean.diag.a.matrix <- numeric(1400)
log.neq1.to.R1 <- numeric(1400)
log.neq2.to.R2 <- numeric(1400)
log.neq3.to.R3 <- numeric(1400)
log.neq4.to.R4 <- numeric(1400)
for(k in 1:1400){
m <- foodweb.info[k, 1]
i <- foodweb.info[k, 2]
df <- mydata_foodweb_set[[m]][[i]]
maxRel.value[k] <- df$maxreL
max.Neq[k] <- max(df$Neq)
min.Neq[k] <- min(df$Neq)
min.abs.R[k] <- min(abs(df$R))
upper.jacobian <- df$jacobian_matrix[upper.tri(df$jacobian_matrix)]
mean.upper.jacobian[k] <- mean(upper.jacobian[upper.jacobian!=0])
lower.jacobian <- df$jacobian_matrix[lower.tri(df$jacobian_matrix)]
mean.lower.jacobian[k] <- mean(lower.jacobian[lower.jacobian!=0])
diag.jacobian <- diag(df$jacobian_matrix)
mean.diag.jacobian[k] <- mean(diag.jacobian[diag.jacobian!=0])
diag.jacobian.dominant.upper[k] <- mean.diag.jacobian[k]/mean.upper.jacobian[k]
diag.jacobian.dominant.lower[k] <- mean.diag.jacobian[k]/mean.lower.jacobian[k]
upper.a.matrix <- df$A_matrix[upper.tri(df$A_matrix)]
mean.upper.a.matrix[k] <- mean(upper.a.matrix[upper.a.matrix!=0])
lower.a.matrix <- df$A_matrix[lower.tri(df$A_matrix)]
mean.lower.a.matrix[k] <- mean(lower.a.matrix[lower.a.matrix!=0])
diag.a.matrix <- diag(df$A_matrix)
mean.diag.a.matrix[k] <- mean(diag.a.matrix[diag.a.matrix!=0])
log.neq1.to.R1[k] <- log(df$Neq[1])/df$R[1]
log.neq2.to.R2[k] <- log(df$Neq[2])/df$R[2]
log.neq3.to.R3[k] <- log(df$Neq[3])/df$R[3]
log.neq4.to.R4[k] <- log(df$Neq[4])/df$R[4]
}
foodweb.info <- mutate(foodweb.info,
maxRel.value = maxRel.value,
max.Neq = max.Neq,
min.Neq = min.Neq,
min.abs.R = min.abs.R,
mean.upper.jacobian = mean.upper.jacobian,
mean.lower.jacobian = mean.lower.jacobian,
mean.diag.jacobian = mean.diag.jacobian,
diag.jacobian.dominant.upper = diag.jacobian.dominant.upper,
diag.jacobian.dominant.lower = diag.jacobian.dominant.lower,
mean.upper.a.matrix = mean.upper.a.matrix,
mean.lower.a.matrix = mean.lower.a.matrix,
mean.diag.a.matrix = mean.diag.a.matrix,
log.neq1.to.R1 = log.neq1.to.R1,
log.neq2.to.R2 = log.neq2.to.R2,
log.neq3.to.R3 = log.neq3.to.R3,
log.neq4.to.R4 = log.neq4.to.R4)
mydata_stability <- left_join(mydata_stability, foodweb.info)
save(mydata_stability, file = 'mydata_stability.Rdata')
Have a look at the stability data columns
load(file='mydata_stability.Rdata')
str(mydata_stability)
## 'data.frame': 2100000 obs. of 44 variables:
## $ row.id : int 1 2 3 4 5 6 7 8 9 10 ...
## $ noise.id : int 1 2 3 4 5 6 7 8 9 10 ...
## $ community.id : int 1 1 1 1 1 1 1 1 1 1 ...
## $ old.motif.id : int 1 1 1 1 1 1 1 1 1 1 ...
## $ recovery.4spp.nonscaled.0.01 : num 455 399 467 449 514 520 486 539 436 3 ...
## $ resistance.4spp.nonscaled.euclidean.value : num 0.076 0.0798 0.0405 0.0519 0.0556 ...
## $ resistance.4spp.nonscaled.euclidean.time : num 125 189 181 120 114 84 83 88 72 1 ...
## $ variability.4spp.nonscaled.community.level: num 0.0578 0.081 0.0543 0.0604 0.063 ...
## $ noise_rep_ID : int 1 2 3 4 5 6 7 8 9 10 ...
## $ interval : num 1 1 1 1 1 1 1 1 1 1 ...
## $ k : num -0.8 -0.8 -0.8 -0.8 -0.8 -0.8 -0.8 -0.8 -0.8 -0.8 ...
## $ p : num 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 ...
## $ variance_scale : num 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 ...
## $ spectral_mimicry : chr "nonmimicry" "nonmimicry" "nonmimicry" "nonmimicry" ...
## $ auto.cor.noise.1 : num -0.802 -0.833 -0.782 -0.813 -0.788 ...
## $ auto.cor.noise.2 : num -0.79 -0.807 -0.796 -0.794 -0.789 ...
## $ auto.cor.noise.3 : num -0.781 -0.813 -0.796 -0.789 -0.807 ...
## $ auto.cor.noise.4 : num -0.801 -0.799 -0.764 -0.798 -0.825 ...
## $ mean.correlation : num 0.185 0.204 0.208 0.217 0.239 ...
## $ new.motif.id : num 1 1 1 1 1 1 1 1 1 1 ...
## $ no.trophic.levels : num 4 4 4 4 4 4 4 4 4 4 ...
## $ no.basal.species : num 1 1 1 1 1 1 1 1 1 1 ...
## $ no.omnivor.species.trophic.criteria : num 0 0 0 0 0 0 0 0 0 0 ...
## $ no.omnivor.links.trophic.criteria : num 0 0 0 0 0 0 0 0 0 0 ...
## $ no.omnivor.species.plant.criteria : num 0 0 0 0 0 0 0 0 0 0 ...
## $ no.omnivor.links.plant.criteria : num 0 0 0 0 0 0 0 0 0 0 ...
## $ connectance_prey_predator : num 0.188 0.188 0.188 0.188 0.188 ...
## $ connectance_include_competiton : num 0.188 0.188 0.188 0.188 0.188 ...
## $ maxRel.value : num -0.00545 -0.00545 -0.00545 -0.00545 -0.00545 ...
## $ max.Neq : num 0.821 0.821 0.821 0.821 0.821 ...
## $ min.Neq : num 0.0208 0.0208 0.0208 0.0208 0.0208 ...
## $ min.abs.R : num 0.00129 0.00129 0.00129 0.00129 0.00129 ...
## $ mean.upper.jacobian : num -0.202 -0.202 -0.202 -0.202 -0.202 ...
## $ mean.lower.jacobian : num 0.0137 0.0137 0.0137 0.0137 0.0137 ...
## $ mean.diag.jacobian : num -0.216 -0.216 -0.216 -0.216 -0.216 ...
## $ diag.jacobian.dominant.upper : num 1.07 1.07 1.07 1.07 1.07 ...
## $ diag.jacobian.dominant.lower : num -15.7 -15.7 -15.7 -15.7 -15.7 ...
## $ mean.upper.a.matrix : num -0.5 -0.5 -0.5 -0.5 -0.5 -0.5 -0.5 -0.5 -0.5 -0.5 ...
## $ mean.lower.a.matrix : num 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 ...
## $ mean.diag.a.matrix : num -0.325 -0.325 -0.325 -0.325 -0.325 -0.325 -0.325 -0.325 -0.325 -0.325 ...
## $ log.neq1.to.R1 : num -0.197 -0.197 -0.197 -0.197 -0.197 ...
## $ log.neq2.to.R2 : num 34.8 34.8 34.8 34.8 34.8 ...
## $ log.neq3.to.R3 : num 154 154 154 154 154 ...
## $ log.neq4.to.R4 : num 3006 3006 3006 3006 3006 ...
The distribution of recovery time
df.recovery.time.distribution <- select(mydata_stability,
new.motif.id,
recovery.4spp.nonscaled.0.01)
p.recovery.time.distribution <- ggplot(
df.recovery.time.distribution, aes(recovery.4spp.nonscaled.0.01)) +
geom_density(fill = 'coral',color='coral') +
facet_wrap(~new.motif.id, scales = "free_y",ncol = 2) +
theme_bw() +
theme(legend.position = 'none',
strip.text = element_text(size=12),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
axis.title.x = element_text(colour="black",size=14),
axis.title.y = element_text(colour="black",size=14),
axis.text.x = element_text(colour="black",size=9),
axis.text.y = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5)) +
ylab('Density')+
xlab('Recovery time')
p.recovery.time.distribution
Analysis and illustration of results
Fig. 2
Plot Fig. 2a
# the temporal response pattern data
stability.example.foodweb <- select(
filter(mydata_stability,
new.motif.id ==1,
community.id==1,
p==0.2,
spectral_mimicry=='mimicry'),
recovery.4spp.nonscaled.0.01,
resistance.4spp.nonscaled.euclidean.value,
variability.4spp.nonscaled.community.level,
noise_rep_ID,
k)
colnames(stability.example.foodweb) <- c(
'Recovery time',
'Extent of change',
'Variability',
'noise_rep_ID',
'k')
stability.example.foodweb <- melt(
stability.example.foodweb,
id.vars = c('noise_rep_ID', 'k'))
p1 <- ggplot(stability.example.foodweb,
aes(x=k,y=value,group=noise_rep_ID))+
geom_point(size=1.2,color='grey')+
stat_summary(aes(y = value,group=1),
fun.y=mean,
size=1,
colour="black",
geom="line",
group=1)+
theme_bw() +
facet_wrap(~variable, scales = 'free_y',
strip.position="top", ncol = 1)+
xlab('Autocorrelation coefficient') +
ylab('Value')+
theme(legend.position = 'left',
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
legend.title = element_text(size=12),
legend.text = element_text(size=11),
strip.text = element_text(size=12),
axis.title.x=element_blank(),
axis.title.y = element_text(colour="black",size=14),
axis.text.x = element_text(colour="black",size=9),
axis.text.y = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5))+
scale_x_continuous(breaks = c(-0.8,-0.4,0,0.4,0.8))
Plot Fig. 2b
# the temporal response pattern data
stability.cv <- ddply(
stability.example.foodweb,
~ k + variable,
summarize,
Uncertainty = sd(value)/mean(value))
p2 <- ggplot(stability.cv,
aes(x=k,y=Uncertainty))+
geom_point(size=2)+
geom_line(size=1)+
theme_bw() +
facet_wrap(~variable, scales = 'free_y',strip.position="top", ncol = 1)+
xlab('Autocorrelation coefficient') +
ylab('Uncertainty')+
theme(
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
legend.title = element_text(size=12),
legend.text = element_text(size=11),
strip.text = element_text(size=12),
axis.title.x=element_blank(),
axis.title.y = element_text(colour="black",size=14),
axis.text.x = element_text(colour="black",size=9),
axis.text.y = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5))+
scale_x_continuous(breaks = c(-0.8,-0.4,0,0.4,0.8))
Combine Fig. 2a and Fig. 2b
p1<- p1+theme(plot.margin = unit(c(0.4,0.4,0.4,0.4), "cm"))
p2<- p2+theme(plot.margin = unit(c(0.4,0.4,0.4,0.4), "cm"))
p1.temporal.pattern <- p1 # for compiling figures
p2.temporal.pattern <- p2 # for compiling figures
# change the x axis label
p1.temporal.pattern <- p1.temporal.pattern+xlab('')
p2.temporal.pattern <- p2.temporal.pattern+xlab('')
# add the subplot indicator letters "a" and "b" to the subgraphs
p1.temporal.pattern <- p1.temporal.pattern +
ggtitle('a') +
theme(plot.title = element_text(size=20, face = 'bold', hjust = -0.2, vjust = -0.15))
p2.temporal.pattern <- p2.temporal.pattern +
ggtitle('b') +
theme(plot.title = element_text(size=20, face = 'bold', hjust = -0.2, vjust = -0.15))
grid.arrange(
p1.temporal.pattern,
p2.temporal.pattern,nrow=1,
bottom=textGrob("Autocorrelation coefficient", gp=gpar(fontsize=14,font=8, face='bold')))
Fig. 3
Calculate the general stability response data
The mean stability was calculated as the mean value of the stability values quantified for the 50 stochasticity replicates.
# the mean response pattern data
mydata_stability_mean <- ddply(na.omit(mydata_stability),
.(new.motif.id, community.id, k, p, spectral_mimicry),
summarise,
recovery.4spp.nonscaled.0.01.mean = mean(recovery.4spp.nonscaled.0.01),
resistance.4spp.nonscaled.euclidean.value.mean = mean(resistance.4spp.nonscaled.euclidean.value),
variability.4spp.nonscaled.community.level.mean = mean(variability.4spp.nonscaled.community.level) )
save(mydata_stability_mean, file='mydata_stability_mean.Rdata')
Prepare graphics data
load(file='mydata_stability_mean.Rdata')
# the mean response pattern data
df.stability.mean <- select(
filter(mydata_stability_mean,
p==0.2,
spectral_mimicry=='mimicry'),
recovery.4spp.nonscaled.0.01.mean,
resistance.4spp.nonscaled.euclidean.value.mean,
variability.4spp.nonscaled.community.level.mean,
new.motif.id,
community.id,
k)
colnames(df.stability.mean) <- c(
'Recovery time','Resistance','Variability',
'new.motif.id','community.id','k')
df.stability.mean <- melt(
df.stability.mean,
id.vars = c('new.motif.id','community.id', 'k'))
# define a function for quantifying the CV
cv.f <- function(x)sd(x)/mean(x)
# select target variable
mydata_stability_selected <- select(
filter(mydata_stability,
p==0.2,
spectral_mimicry == 'mimicry'),
recovery.4spp.nonscaled.0.01,
resistance.4spp.nonscaled.euclidean.value,
variability.4spp.nonscaled.community.level,
new.motif.id,
community.id,
k)
# remove rows with NA; only a very small fraction of data contains NA
mydata_stability_selected <- na.omit(
mydata_stability_selected)
# obtain the cv
df.stability.cv <- ddply(mydata_stability_selected,
~new.motif.id+community.id+k,
summarise,
cv.recovery = cv.f(recovery.4spp.nonscaled.0.01),
cv.resistance = cv.f(resistance.4spp.nonscaled.euclidean.value),
cv.variability= cv.f(variability.4spp.nonscaled.community.level))
# to long format
df.stability.cv <- melt(
df.stability.cv,
id.vars = c('new.motif.id','community.id', 'k'))
Plot Fig. 3
df1 <- filter(df.stability.mean,variable == 'Recovery time')
df2 <- filter(df.stability.mean,variable == 'Resistance')
df3 <- filter(df.stability.mean,variable == 'Variability')
df4 <- filter(df.stability.cv, variable == 'cv.recovery')
df5 <- filter(df.stability.cv, variable == 'cv.resistance')
df6 <- filter(df.stability.cv, variable == 'cv.variability')
df.plot.data <- list(df1,df2,df3,df4,df5,df6)
df.plot.list <- list()
plot.names <- c('Recovery\n time',
'Extent of\n change',
'Variability\n ',
'Recovery\n time',
'Extent of\n change',
'Variability\n ')
breaks.list<- list(
c(100,300,500),
c(0.10,0.30,0.50),
c(0.10,0.30,0.50),
c(0,0.50,1.00),
c(0,0.50,1.00),
c(0.1,0.2)
)
for(i in 1:6){
df.plot.list[[i]] <- ggplot(df.plot.data[[i]],
aes(x=k,y=value,group=community.id))+
geom_line(size=0.3,color='blue')+
theme_bw() +
facet_grid(new.motif.id~.)+
ggtitle(plot.names[i])+
theme( plot.title = element_text(size=14,face='bold'),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
legend.title = element_text(size=12),
legend.text = element_text(size=11),
strip.background = element_blank(),
strip.text = element_blank(),
axis.title.x = element_blank(),
axis.title.y = element_blank(),
axis.text.x = element_text(colour="black",size=9),
axis.text.y = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5))+
scale_x_continuous(breaks = c(-0.8,0,0.8))+
scale_y_continuous(breaks = breaks.list[[i]])
}
# plot community-level change
G1<-grid.arrange(df.plot.list[[1]],
df.plot.list[[2]],
df.plot.list[[3]],nrow=1,
top=textGrob("a",x=0,hjust=0,gp=gpar(fontsize=25,font=8,face='bold')),
left=textGrob("Value",rot=90,
gp=gpar(fontsize=18,face='bold')))
G2<-grid.arrange(df.plot.list[[4]],
df.plot.list[[5]],
df.plot.list[[6]],nrow=1,
top=textGrob("b",x=0,hjust=0,gp=gpar(fontsize=25,font=8,face='bold')),
left=textGrob("Uncertainty",rot=90,
gp=gpar(fontsize=18,face='bold')))
grid.arrange(G1,G2,nrow=1,
bottom=textGrob("Autocorrelation coefficient",
gp=gpar(fontsize=18,face='bold')))
Fig. 4
Prepare data for graph
load(file='mydata_stability_mean.Rdata')
# the temporal response pattern data
df.stability.mean <- select(
filter(mydata_stability_mean,
k==0.8,
spectral_mimicry=='mimicry'),
recovery.4spp.nonscaled.0.01.mean,
resistance.4spp.nonscaled.euclidean.value.mean,
variability.4spp.nonscaled.community.level.mean,
new.motif.id,
community.id,
p)
colnames(df.stability.mean) <- c(
'Recovery time','Resistance','Variability',
'new.motif.id','community.id','p')
df.stability.mean <- melt(
df.stability.mean,
id.vars = c('new.motif.id','community.id', 'p'))
# define a function for quantifying the CV
cv.f <- function(x)sd(x)/mean(x)
# select target variable
mydata_stability_selected <- select(
filter(mydata_stability,
k==0.8,
spectral_mimicry == 'mimicry'),
recovery.4spp.nonscaled.0.01,
resistance.4spp.nonscaled.euclidean.value,
variability.4spp.nonscaled.community.level,
new.motif.id,
community.id,
p)
# remove rows with NA; only a very small fraction of data contains NA
mydata_stability_selected <- na.omit(
mydata_stability_selected)
# obtain the cv
df.stability.cv <- ddply(mydata_stability_selected,
~new.motif.id+community.id+p,
summarise,
cv.recovery = cv.f(recovery.4spp.nonscaled.0.01),
cv.resistance = cv.f(resistance.4spp.nonscaled.euclidean.value),
cv.variability= cv.f(variability.4spp.nonscaled.community.level))
# to long format
df.stability.cv <- melt(
df.stability.cv,
id.vars = c('new.motif.id','community.id', 'p'))
Plot Fig. 4
df1 <- filter(df.stability.mean,variable == 'Recovery time')
df2 <- filter(df.stability.mean,variable == 'Resistance')
df3 <- filter(df.stability.mean,variable == 'Variability')
df4 <- filter(df.stability.cv, variable == 'cv.recovery')
df5 <- filter(df.stability.cv, variable == 'cv.resistance')
df6 <- filter(df.stability.cv, variable == 'cv.variability')
df.plot.data <- list(df1,df2,df3,df4,df5,df6)
df.plot.list <- list()
plot.names <- c('Recovery\n time',
'Extent of\n change',
'Variability\n ',
'Recovery\n time',
'Extent of\n change',
'Variability\n ')
breaks.list<- list(
c(0,100,200),
c(0.10,0.35,0.60),
c(0.10,0.35,0.60),
c(0.4,0.70,1.00),
c(0.2,0.60,1.00),
c(0.05,0.15,0.25)
)
for(i in 1:6){
df.plot.list[[i]] <- ggplot(df.plot.data[[i]],
aes(x=p,y=value,group=community.id))+
geom_line(size=0.3,color='blue')+
theme_bw() +
facet_grid(new.motif.id~.)+
ggtitle(plot.names[i])+
theme( plot.title = element_text(size=14,face='bold'),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
legend.title = element_text(size=12),
legend.text = element_text(size=11),
strip.background = element_blank(),
strip.text = element_blank(),
axis.title.x = element_blank(),
axis.title.y = element_blank(),
axis.text.x = element_text(colour="black",size=9),
axis.text.y = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5))+
scale_x_continuous(breaks = c(0.2,0.5,0.8))+
scale_y_continuous(breaks = breaks.list[[i]], limits = range(breaks.list[[i]]))
}
# plot community-level change
G1<-grid.arrange(df.plot.list[[1]],
df.plot.list[[2]],
df.plot.list[[3]],nrow=1,
top=textGrob("a",x=0,hjust=0,gp=gpar(fontsize=25,font=8,face='bold')),
left=textGrob("Value",rot=90,
gp=gpar(fontsize=18,face='bold')))
G2<-grid.arrange(df.plot.list[[4]],
df.plot.list[[5]],
df.plot.list[[6]],nrow=1,
top=textGrob("b",x=0,hjust=0,gp=gpar(fontsize=25,font=8,face='bold')),
left=textGrob("Uncertainty",rot=90,
gp=gpar(fontsize=18,face='bold')))
grid.arrange(G1,G2,nrow=1,
bottom=textGrob("Species response correlation",
gp=gpar(fontsize=18,face='bold')))
Random Forest and Fig. 5
Prepare data of the mean stability for RF
load(file='mydata_stability_mean.Rdata')
load(file='mydata_stability.Rdata')
stability.df <- filter(mydata_stability, noise_rep_ID == 1)
# combine the selected data with the mean stability data
mydata_stability_mean <- right_join(mydata_stability_mean, stability.df)
# add the information of the existence of the competition in a module
competition.f <- function(x) ifelse(x %in% c(3,4,5,7,13), 1, 0)
mydata_stability_mean <- mutate(mydata_stability_mean,
competition = competition.f(new.motif.id))
# select the variables of interest
target.variables <- c(
'k',
'p',
'spectral_mimicry',
'recovery.4spp.nonscaled.0.01.mean',
'resistance.4spp.nonscaled.euclidean.value.mean',
'variability.4spp.nonscaled.community.level.mean',
'no.trophic.levels',
'no.basal.species',
'no.omnivor.species.plant.criteria',
'no.omnivor.links.plant.criteria',
'connectance_include_competiton',
'competition',
'maxRel.value',
'max.Neq',
'min.Neq',
'min.abs.R',
'mean.upper.jacobian',
'mean.lower.jacobian',
'mean.diag.jacobian',
'mean.upper.a.matrix',
'mean.lower.a.matrix',
'mean.diag.a.matrix')
df <- select(mydata_stability_mean, one_of(target.variables))
df <- filter(df, spectral_mimicry=='mimicry')
df <- select(df, -spectral_mimicry)
# recovery dataset
df.recovery.time.mean <- select(df,
-resistance.4spp.nonscaled.euclidean.value.mean,
-variability.4spp.nonscaled.community.level.mean)
# resistance dataset
df.resistance.mean <- select(df,
-variability.4spp.nonscaled.community.level.mean,
-recovery.4spp.nonscaled.0.01.mean)
# variability dataset
df.variability.mean <- select(df,
-resistance.4spp.nonscaled.euclidean.value.mean,
-recovery.4spp.nonscaled.0.01.mean)
rm(df)
rm(target.variables)
Prepare data of the original stability for RF
mydata_stability <- mutate(mydata_stability,
competition = competition.f(new.motif.id))
target.variables <- c(
'k',
'p',
'spectral_mimicry',
'recovery.4spp.nonscaled.0.01',
'resistance.4spp.nonscaled.euclidean.value',
'variability.4spp.nonscaled.community.level',
'no.trophic.levels',
'no.basal.species',
'no.omnivor.species.plant.criteria',
'no.omnivor.links.plant.criteria',
'connectance_include_competiton',
'competition',
'maxRel.value',
'max.Neq',
'min.Neq',
'min.abs.R',
'mean.upper.jacobian',
'mean.lower.jacobian',
'mean.diag.jacobian',
'mean.upper.a.matrix',
'mean.lower.a.matrix',
'mean.diag.a.matrix')
df <- select(mydata_stability, one_of(target.variables))
df <- filter(df, spectral_mimicry=='mimicry')
df <- select(df, -spectral_mimicry)
# recovery dataset
df.recovery.time <- select(df,
-resistance.4spp.nonscaled.euclidean.value,
-variability.4spp.nonscaled.community.level)
# resistance dataset
df.resistance <- select(df,
-variability.4spp.nonscaled.community.level,
-recovery.4spp.nonscaled.0.01)
# variability dataset
df.variability <- select(df,
-resistance.4spp.nonscaled.euclidean.value,
-recovery.4spp.nonscaled.0.01)
df.recovery.time <- na.omit(df.recovery.time)
df.resistance <- na.omit(df.resistance)
df.variability <- na.omit(df.variability)
rm(df)
RF on mean stability
mymod.using.mean.stability <- list()
nn.vec <- c(1:10,20,30,40,50,seq(100,500,100))
set.seed(1234)
for(i in 1:length(nn.vec)){
nn <- nn.vec[i]
mymod.recovery.time.mean <- ranger(
formula = recovery.4spp.nonscaled.0.01.mean ~ .,
data = df.recovery.time.mean,
num.trees = nn,
seed = 1234,
importance = "permutation", scale.permutation.importance = TRUE)
mymod.resistance.mean <- ranger(
formula = resistance.4spp.nonscaled.euclidean.value.mean ~ .,
data = df.resistance.mean,
num.trees = nn,
seed = 1234,
importance = "permutation", scale.permutation.importance = TRUE)
mymod.variability.mean <- ranger(
formula = variability.4spp.nonscaled.community.level.mean ~ .,
data = df.variability.mean,
num.trees = nn,
seed = 1234,
importance = "permutation", scale.permutation.importance = TRUE)
result <- list(mymod.recovery.time.mean,
mymod.resistance.mean,
mymod.variability.mean)
mymod.using.mean.stability[[i]] <- result
}
save(mymod.using.mean.stability, file = 'mymod.using.mean.stability.Rdata')
RF on the original stability
- This code uses a lot of memory and CPU.
- It runs well on a workstation or an HPC cluster with a high-end configuration.
mymod.using.original.stability <- list()
nn.vec <- c(1:10,20,30,40,50,seq(100,500,100))
set.seed(1234)
for(i in 1:length(nn.vec)){
nn <- nn.vec[i]
mymod.recovery.time.ori <- ranger(
formula = recovery.4spp.nonscaled.0.01 ~ .,
data = df.recovery.time,
num.trees = nn,
seed = 1234,
importance = "permutation", scale.permutation.importance = TRUE)
mymod.resistance.ori <- ranger(
formula = resistance.4spp.nonscaled.euclidean.value ~ .,
data = df.resistance,
num.trees = nn,
seed = 1234,
importance = "permutation",
scale.permutation.importance = TRUE)
mymod.variability.ori <- ranger(
formula = variability.4spp.nonscaled.community.level ~ .,
data = df.variability,
num.trees = nn,
seed = 1234,
importance = "permutation",
scale.permutation.importance = TRUE)
result <- list(mymod.recovery.time.ori,
mymod.resistance.ori,
mymod.variability.ori)
mymod.using.original.stability[[i]] <- result
}
save(mymod.using.original.stability, file='mymod.using.original.stability.Rdata')
prepare graph data
load(file='mymod.using.original.stability.Rdata')
load(file='mymod.using.mean.stability.Rdata')
# select key data from the big data
selected.original <- list()
selected.mean <- list()
for(i in 1:19){
sub.selected.original <- list()
sub.selected.mean <- list()
for(j in 1:3){
result.original <- mymod.using.original.stability[[i]][[j]]
result.mean <- mymod.using.mean.stability[[i]][[j]]
sub.sub.selected.original <- list(
result.original$variable.importance,
result.original$prediction.error,
result.original$r.squared)
sub.sub.selected.mean <- list(
result.mean$variable.importance,
result.mean$prediction.error,
result.mean$r.squared)
sub.selected.original[[j]] <- sub.sub.selected.original
sub.selected.mean[[j]] <- sub.sub.selected.mean
}
selected.original[[i]] <- sub.selected.original
selected.mean[[i]] <- sub.selected.mean
}
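The hard-coded `1:19` in the extraction loop above matches the number of forest sizes in `nn.vec`; a quick base-R check:

```r
# the grid of forest sizes used throughout this section
nn.vec <- c(1:10, 20, 30, 40, 50, seq(100, 500, 100))
length(nn.vec)  # one fitted model list per forest size: 19
```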
df.mod.original <- unlist(selected.original)# change data to list
df.mod.mean <- unlist(selected.mean)
df.mod.original.m <- as.data.frame(
matrix(df.mod.original, ncol=20, byrow=TRUE))# change data to dataframe
df.mod.mean.m <- as.data.frame(matrix(df.mod.mean, ncol=20, byrow=TRUE))
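Each `ranger` fit contributes 18 variable-importance values plus `prediction.error` and `r.squared`, i.e. 20 numbers; `unlist()` flattens them in that order, so `matrix(..., ncol = 20, byrow = TRUE)` restores one row per fit. A toy illustration of the same reshaping:

```r
# two 'fits', each flattened to 5 numbers (3 'importances' + 2 scalars)
flat <- unlist(list(list(1:3, 4, 5), list(6:8, 9, 10)))
m <- matrix(flat, ncol = 5, byrow = TRUE)
m[1, ]  # the first fit's numbers, in order: 1 2 3 4 5
```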
df.mod.original <- mutate(df.mod.original.m,
datapattern = rep('original',nrow(df.mod.original.m)))
df.mod.mean <- mutate(df.mod.mean.m,
datapattern = rep('mean',nrow(df.mod.mean.m)))
# factors of stability components and number of trees
nn.vec <- c(1:10,20,30,40,50,seq(100,500,100))
factors.combine <- expand.grid(c('recovery.time','resistance','variability'),nn.vec)
factors.combine <- as.data.frame(factors.combine)
colnames(factors.combine) <-
c('stability_components','tree.numbers')
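`expand.grid()` varies its first argument fastest, so `factors.combine` lists the three stability components within each tree number, which is exactly the row order of the unlisted model results above. A minimal check:

```r
# first factor cycles fastest: three components per tree number
eg <- expand.grid(c('recovery.time', 'resistance', 'variability'), c(1, 2))
as.character(eg$Var1[1:3])  # "recovery.time" "resistance" "variability"
```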
# combine model results of original and mean stability
df.mod.original <- cbind(df.mod.original, factors.combine)
df.mod.mean <- cbind(df.mod.mean, factors.combine)
df.mod <- rbind(df.mod.original, df.mod.mean)
save(df.mod, file = 'df.mod.Rdata')
prepare first subgraph (predictability)
load('df.mod.Rdata')
df.mod.rsquared <- df.mod[,20:23]
colnames(df.mod.rsquared)<-c(
'rsquared','datasource','stability_components','tree.numbers')
to_string <- as_labeller(c(`recovery.time` = "Recovery time",
`resistance` = "Extent of change",
`variability` = "Variability"))
df.mod.rsquared <- mutate(df.mod.rsquared,
pattern = rep(c('Specific\ntemporal\nresponse','General\nresponse\n'),each=57))
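The `each = 57` comes from 19 forest sizes times 3 stability components per data source:

```r
nn.vec <- c(1:10, 20, 30, 40, 50, seq(100, 500, 100))
length(nn.vec) * 3  # rows per data source in df.mod.rsquared: 57
```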
p1 <- ggplot(df.mod.rsquared,
aes(x=tree.numbers,y=rsquared,
color=pattern,linetype=pattern))+
geom_line(size=1.5)+
theme_bw() +
facet_wrap(~stability_components,labeller = to_string)+
xlab('Number of trees in the random forest model') +
ylab(expression(paste("Pseudo-", italic(R),sep='')^2))+
ggtitle('a')+
theme(plot.title = element_text(size=20,face='bold',hjust = -0.45),
legend.position = 'left',
legend.title = element_blank(),
legend.text = element_text(size=12),
strip.text = element_text(size=12),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
axis.title.x = element_text(colour="black",size=14),
axis.title.y = element_text(colour="black",size=14),
axis.text.x = element_text(colour="black",size=9),
axis.text.y = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5)) +
scale_colour_brewer(palette='Dark2')
prepare second subgraph (variable importance)
# define a function to 1) change negative values to zero and then 2) scale x by its sum
cnvtzas.f<-function(x){
x[x<0] <- 0
x <- x/sum(x)
return(x)
}
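A quick check of `cnvtzas.f` on a toy vector (the function is restated so the snippet is self-contained): the negative entry is zeroed and the remaining values are rescaled to sum to one.

```r
cnvtzas.f <- function(x){
  x[x < 0] <- 0  # clip negative importances to zero
  x / sum(x)     # normalize to relative importance
}
cnvtzas.f(c(-1, 2, 2))  # 0.0 0.5 0.5
```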
namess <- names(selected.original[[19]][[1]][[1]])
importance.values.original.stability <- data.frame(
recovery.time = cnvtzas.f(selected.original[[19]][[1]][[1]]),
resistance = cnvtzas.f(selected.original[[19]][[2]][[1]]),
variability = cnvtzas.f(selected.original[[19]][[3]][[1]]),
data.source = rep('original',18),
Predictors = namess)
importance.values.mean.stability <- data.frame(
recovery.time = cnvtzas.f(selected.mean[[19]][[1]][[1]]),
resistance = cnvtzas.f(selected.mean[[19]][[2]][[1]]),
variability = cnvtzas.f(selected.mean[[19]][[3]][[1]]),
data.source = rep('mean',18),
Predictors = namess)
df.importance <- rbind(
importance.values.mean.stability,
importance.values.original.stability)
df.importance <- melt(df.importance, id.vars = c('data.source','Predictors'))
# set the order of the predictors on the axis
predictor.name.orders<-rev(
c("k","p",
"maxRel.value","max.Neq","min.Neq","min.abs.R",
"mean.upper.jacobian","mean.lower.jacobian","mean.diag.jacobian",
"mean.upper.a.matrix","mean.lower.a.matrix","mean.diag.a.matrix",
"no.trophic.levels","no.basal.species",
"no.omnivor.species.plant.criteria","no.omnivor.links.plant.criteria",
"connectance_include_competiton","competition"))
to_string <- as_labeller(c(`recovery.time` = "Recovery time",
`resistance` = "Extent of change",
`variability` = "Variability",
`mean` = "General response",
`original` = "Specific temporal response"))
p2 <- ggplot(df.importance,
aes(x=Predictors,y=value,fill=data.source))+
geom_bar(stat="identity")+
theme_bw() +
coord_flip()+
scale_fill_brewer(palette='Dark2')+
facet_grid(data.source~variable, labeller = to_string)+
ggtitle('b')+
xlab('Explanatory variables') +
ylab('Relative importance') +
theme(
plot.title = element_text(size=20,face='bold',hjust = -0.45),
legend.title=element_blank(), legend.position = 'none',
legend.text = element_text(size = 13),
strip.text = element_text(size=12),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
axis.title.x = element_text(colour="black",size=14),
axis.title.y = element_text(colour="black",size=14),
axis.text.x = element_text(colour="black",size=9),
axis.text.y = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5))+
scale_x_discrete(limit=predictor.name.orders,
labels=c("competition",
"connectance",
"n.omnivorous.links",
"n.omnivorous.species",
"n.basal.species",
"n.trophic.levels",
"mean.diag.A",
"mean.lower.tri.A",
"mean.upper.tri.A",
"mean.diag.J",
"mean.lower.tri.J",
"mean.upper.tri.J",
"min.R",
"min.Neq",
"max.Neq",
"max.real.eigen.J" ,
'correlation.species',
"autocorrelation"))plot the final graph
p1<- p1+theme(plot.margin = unit(c(0.5,1.55,0.5,1), "cm"))
p2<- p2+theme(plot.margin = unit(c(0,1,0.1,1.55), "cm"))
p1.random.forest <- p1
p2.random.forest <- p2
layout.pattern<- matrix(
c(1,1,1,1,1,1,
2,2,2,2,2,2,2,2,2,2,2,2),
nrow=9,byrow=TRUE)
grid.arrange(
grobs=list(p1,p2),
layout_matrix = layout.pattern)
The remaining supplementary figures on sensitivity analysis
So far, the code above has produced the main-text figures and the supplementary figures describing the characteristics of the food-webs and the environmental stochasticity. This section provides the code for the three supplementary figures on sensitivity analysis. Only the plotting code is presented here, as the simulations for the sensitivity analyses closely follow the simulation code provided above.
Sensitivity analysis on stochasticity magnitude
load(file='mydata_stability.Rdata')
load('df.stability.0.01.Rdata')
stability.example.foodweb <- select(
filter(mydata_stability,
new.motif.id ==1,
community.id==1,
p==0.2,
spectral_mimicry=='mimicry'),
noise_rep_ID,
k,
noise.id)
stability.example.foodweb <- left_join(stability.example.foodweb,
df.stability.0.01)
stability.example.foodweb <- select(stability.example.foodweb,
recovery.4spp.nonscaled.0.01,
resistance.4spp.nonscaled.euclidean.value,
variability.4spp.nonscaled.community.level,
noise_rep_ID,
k)
colnames(stability.example.foodweb) <- c(
'Recovery time',
'Extent of change',
'Variability',
'noise_rep_ID',
'k')
stability.example.foodweb <- melt(
stability.example.foodweb,
id.vars = c('noise_rep_ID', 'k'))
Plot a
p1 <- ggplot(stability.example.foodweb,
aes(x=k,y=value,group=noise_rep_ID))+
geom_point(size=1.2,color='grey')+
stat_summary(aes(y = value,group=1),
fun.y=mean,
size=1,
colour="black",
geom="line",
group=1)+
theme_bw() +
facet_wrap(~variable, scales = 'free_y',
strip.position="top", ncol = 1)+
xlab('Autocorrelation coefficient') +
ylab('Value')+
theme(legend.position = 'left',
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
legend.title = element_text(size=12),
legend.text = element_text(size=11),
strip.text = element_text(size=12),
axis.title.x=element_blank(),
axis.title.y = element_text(colour="black",size=14),
axis.text.x = element_text(colour="black",size=9),
axis.text.y = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5))+
scale_x_continuous(breaks = c(-0.8,-0.4,0,0.4,0.8))
Plot b
# the temporal response pattern data
stability.cv <- ddply(
stability.example.foodweb,
~ k + variable,
summarize,
Uncertainty = sd(value)/mean(value))
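Here the uncertainty is the coefficient of variation (sd divided by mean) of each stability value across noise replicates; on a toy vector:

```r
# coefficient of variation: spread relative to the mean
vals <- c(90, 100, 110)
sd(vals) / mean(vals)  # 0.1
```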
p2 <- ggplot(stability.cv,
aes(x=k,y=Uncertainty))+
geom_point(size=2)+
geom_line(size=1)+
theme_bw() +
facet_wrap(~variable, scales = 'free_y',strip.position="top", ncol = 1)+
xlab('Autocorrelation coefficient') +
ylab('Uncertainty')+
theme(
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
legend.title = element_text(size=12),
legend.text = element_text(size=11),
strip.text = element_text(size=12),
axis.title.x=element_blank(),
axis.title.y = element_text(colour="black",size=14),
axis.text.x = element_text(colour="black",size=9),
axis.text.y = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5))+
scale_x_continuous(breaks = c(-0.8,-0.4,0,0.4,0.8))
combine
p1<- p1+theme(plot.margin = unit(c(0.4,0.4,0.4,0.4), "cm"))
p2<- p2+theme(plot.margin = unit(c(0.4,0.4,0.4,0.4), "cm"))
p1.temporal.pattern <- p1 # for compiling figures
p2.temporal.pattern <- p2 # for compiling figures
# change the x axis label
p1.temporal.pattern <- p1.temporal.pattern+xlab('')
p2.temporal.pattern <- p2.temporal.pattern+xlab('')
# add the subplot indicator letter "a, b"" to subgraph
p1.temporal.pattern <- p1.temporal.pattern +
ggtitle('a') +
theme(plot.title = element_text(size=20, face = 'bold', hjust = -0.2, vjust = -0.15))
p2.temporal.pattern <- p2.temporal.pattern +
ggtitle('b') +
theme(plot.title = element_text(size=20, face = 'bold', hjust = -0.2, vjust = -0.15))
grid.arrange(
p1.temporal.pattern,
p2.temporal.pattern,nrow=1,
bottom=textGrob("Autocorrelation coefficient", gp=gpar(fontsize=14,font=8, face='bold')))Sensitivity analysis on species identity
load(file='sensitivity.result.species.identity.Rdata') # object name: df.stability.mean.cv
df1 <- filter(df.stability.mean.cv, variable == 'mean.recovery')
df2 <- filter(df.stability.mean.cv, variable == 'mean.resistance')
df3 <- filter(df.stability.mean.cv, variable == 'cv.recovery')
df4 <- filter(df.stability.mean.cv, variable == 'cv.resistance')
df.plot.data <- list(df1,df2,df3,df4)
df.plot.list <- list()
plot.names <- c('Recovery\n time',
'Extent of\n change',
'Recovery\n time',
'Extent of\n change')
breaks.list<- list(
c(100,300,500),
c(0.10,0.30,0.50),
c(0,0.50,1.00),
c(0,0.50,1.00)
)
for(i in 1:4){
df.plot.list[[i]] <- ggplot(df.plot.data[[i]],
aes(x=k,y=value,group=community.id))+
geom_line(size=0.3,color='blue')+
theme_bw() +
facet_grid(species_perturbed~.)+
ggtitle(plot.names[i])+
theme( plot.title = element_text(size=14,face='bold',hjust = 0.5),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
legend.title = element_text(size=12),
legend.text = element_text(size=11),
strip.background = element_blank(),
strip.text = element_blank(),
axis.title.x = element_blank(),
axis.title.y = element_blank(),
axis.text.x = element_text(colour="black",size=9),
axis.text.y = element_text(colour="black",size=9),
panel.border = element_rect(colour = "black", fill=NA, size=0.5))+
scale_x_continuous(breaks = c(-0.8,0,0.8))+
scale_y_continuous(breaks = breaks.list[[i]])
}
# plot community-level change
G1 <- grid.arrange(df.plot.list[[1]],
df.plot.list[[2]], nrow=1,
top=textGrob("a", x=0, hjust=-.2, gp=gpar(fontsize=25, fontface='bold')),
left=textGrob("Value", rot=90,
gp=gpar(fontsize=18, fontface='bold')))
G2 <- grid.arrange(df.plot.list[[3]],
df.plot.list[[4]], nrow=1,
top=textGrob("b", x=0, hjust=-.2, gp=gpar(fontsize=25, fontface='bold')),
left=textGrob("Uncertainty", rot=90,
gp=gpar(fontsize=18, fontface='bold')))
grid.arrange(G1, G2, nrow=1,
bottom=textGrob("Autocorrelation coefficient",
gp=gpar(fontsize=18, fontface='bold')))
Sensitivity analysis on time range of variability
The food-web modules in the supplementary figure of the paper were added using Adobe Illustrator.
load(file='sensitivity.result.time.range.variability.Rdata') # object name: df.stability.mean.cv
df1 <- filter(df.stability.mean.cv,variable == 'mean.recovery')
df2 <- filter(df.stability.mean.cv,variable == 'mean.resistance')
df3 <- filter(df.stability.mean.cv,variable == 'mean.variability')
df4 <- filter(df.stability.mean.cv,variable == 'mean.new.variability')
df5 <- filter(df.stability.mean.cv, variable == 'cv.recovery')
df6 <- filter(df.stability.mean.cv, variable == 'cv.resistance')
df7 <- filter(df.stability.mean.cv, variable == 'cv.variability')
df8 <- filter(df.stability.mean.cv, variable == 'cv.new.variability')
df.plot.data <- list(df1,df2,df3,df4,df5,df6,df7,df8)
df.plot.list <- list()
plot.names <- c(
'Recovery time',
'Extent of change',
'\nVariability',
'Transient\nvariability',
'Recovery time',
'Extent of change',
'\nVariability',
'Transient\nvariability'
)
breaks.list<- list(
c(100,300,500),
c(0,0.25,0.50),
c(0,0.25,0.50),
c(0,0.25,0.50),
c(0,0.50,1.00),
c(0,0.50,1.00),
c(0,0.25,0.5),
c(0,0.3,0.6)
)
for(i in 1:8){
df.plot.list[[i]] <- ggplot(df.plot.data[[i]],
aes(x=k,y=value,group=community.id))+
geom_line(size=0.3,color='blue')+
theme_bw() +
facet_grid(new.motif.id~.)+
ggtitle(plot.names[i])+
theme( plot.title = element_text(size=14,face='bold',hjust = 0.5),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
legend.title = element_text(size=12),
legend.text = element_text(size=11),
strip.background = element_blank(),
strip.text = element_blank(),
axis.title.x = element_blank(),
axis.title.y = element_blank(),
axis.text.x = element_text(colour="black",size=7),
axis.text.y = element_text(colour="black",size=7),
panel.border = element_rect(colour = "black", fill=NA, size=0.5))+
scale_y_continuous(breaks = breaks.list[[i]],limits = range(breaks.list[[i]]))+
scale_x_continuous(breaks = c(-0.8,0,0.8))
}
df.plot.list.central.pattern <- df.plot.list
p1<-df.plot.list[[1]]
p2<-df.plot.list[[2]]
p3<-df.plot.list[[3]]
p4<-df.plot.list[[4]]
p5<-df.plot.list[[5]]
p6<-df.plot.list[[6]]
p7<-df.plot.list[[7]]
p8<-df.plot.list[[8]]
G1 <- grid.arrange(p3, p4, nrow=1,
top=textGrob("a", x=0, hjust=-0.4, gp=gpar(fontsize=25, fontface='bold')),
left=textGrob("Value", rot=90,
gp=gpar(fontsize=18, fontface='bold')))
G2 <- grid.arrange(p7, p8, nrow=1,
top=textGrob("b", x=0, hjust=-0.4, gp=gpar(fontsize=25, fontface='bold')),
left=textGrob("Uncertainty", rot=90,
gp=gpar(fontsize=18, fontface='bold')))
grid.arrange(G1, G2, nrow=1,
bottom=textGrob("Autocorrelation coefficient",
gp=gpar(fontsize=18, fontface='bold')))